
    A Pattern Calculus for Rule Languages: Expressiveness, Compilation, and Mechanization (Artifact)

    This artifact contains the accompanying code for the ECOOP 2015 paper "A Pattern Calculus for Rule Languages: Expressiveness, Compilation, and Mechanization". It contains source files for a full mechanization of the three languages presented in the paper: CAMP (Calculus for Aggregating Matching Patterns), NRA (Nested Relational Algebra), and NNRC (Named Nested Relational Calculus). Translations between all three languages and their attendant proofs of correctness are included. Additionally, a mechanization of a type system for the main languages is provided, along with bidirectional proofs of type preservation and proofs of the time complexity of the various compilers.
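
    To give a feel for what "translations with attendant proofs of correctness" amounts to in a mechanization, the following Lean 4 sketch states and proves semantics preservation for a pair of toy languages. It is purely illustrative: the artifact itself is not written in Lean, and these toy languages are hypothetical stand-ins, not CAMP, NRA, or NNRC.

        -- Illustrative only: a toy source language, a toy target language,
        -- a translation between them, and a proof that the translation
        -- preserves evaluation. None of this is the artifact's actual code.

        inductive Src where
          | lit : Nat → Src
          | add : Src → Src → Src

        inductive Tgt where
          | const : Nat → Tgt
          | plus  : Tgt → Tgt → Tgt

        def evalSrc : Src → Nat
          | .lit n   => n
          | .add a b => evalSrc a + evalSrc b

        def evalTgt : Tgt → Nat
          | .const n  => n
          | .plus a b => evalTgt a + evalTgt b

        def translate : Src → Tgt
          | .lit n   => .const n
          | .add a b => .plus (translate a) (translate b)

        -- Correctness: translating and then evaluating agrees with
        -- evaluating the source directly.
        theorem translate_correct (e : Src) :
            evalTgt (translate e) = evalSrc e := by
          induction e with
          | lit n => rfl
          | add a b iha ihb => simp [translate, evalTgt, evalSrc, iha, ihb]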

    A Pattern Calculus for Rule Languages: Expressiveness, Compilation, and Mechanization

    This paper introduces a core calculus for pattern matching in production rule languages: the Calculus for Aggregating Matching Patterns (CAMP). CAMP is expressive enough to capture modern rule languages such as JRules, including extensions for aggregation. We show how CAMP can be compiled into a nested relational algebra (NRA) with only minimal extension. This paves the way for applying relational techniques to running rules over large stores. Furthermore, we show that NRA can also be compiled back to CAMP, using the named nested relational calculus (NNRC) as an intermediate step. We mechanize proofs of correctness, program size preservation, and type preservation of the translations using modern theorem-proving techniques. A corollary of the type preservation is that polymorphic type inference for both CAMP and NRA is NP-complete. CAMP and its correspondence to NRA provide the foundations for efficient implementations of rule languages using database technologies.
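
    As a purely illustrative sketch of the general technique (not the paper's formal definitions of CAMP or NRA, and with entirely hypothetical names), the following Python fragment compiles a tiny pattern language into a relational-algebra-style selection over a collection of records.

        # Illustrative only: a toy pattern language and its compilation to a
        # relational-algebra-style selection. Hypothetical names throughout;
        # this is not CAMP or NRA.
        from dataclasses import dataclass

        @dataclass
        class FieldEquals:      # pattern: a named field equals a constant
            field: str
            value: object

        @dataclass
        class Both:             # pattern: both sub-patterns must match
            left: object
            right: object

        def compile_pattern(pat):
            """Compile a pattern into a predicate usable by selection."""
            if isinstance(pat, FieldEquals):
                return lambda rec: rec.get(pat.field) == pat.value
            if isinstance(pat, Both):
                l, r = compile_pattern(pat.left), compile_pattern(pat.right)
                return lambda rec: l(rec) and r(rec)
            raise TypeError(f"unknown pattern: {pat!r}")

        def select(pred, relation):
            """Relational selection: keep records satisfying the predicate."""
            return [rec for rec in relation if pred(rec)]

        # Running a rule over a store = evaluating algebra over a collection.
        people = [{"name": "ada", "age": 42}, {"name": "bob", "age": 7}]
        rule = Both(FieldEquals("age", 42), FieldEquals("name", "ada"))
        print(select(compile_pattern(rule), people))
        # -> [{'name': 'ada', 'age': 42}]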

    Ariadne: Analysis for Machine Learning Programs

    Machine learning has transformed domains like vision and translation, and is now increasingly used in science, where the correctness of such code is vital. Python is popular for machine learning, in part because of its wealth of machine-learning libraries, and is felt to make development faster; however, as a dynamic language it has less support for error detection at code-creation time than tools like Eclipse provide. This is especially problematic for machine learning: given its statistical nature, code with subtle errors may run and produce results that look plausible but are meaningless, which can vitiate scientific results. We report on Ariadne: applying a static framework, WALA, to machine-learning code that uses TensorFlow. We have created a static analysis for Python, a type system for tracking tensors---TensorFlow's core data structures---and a data-flow analysis to track their usage. We report on how the system was built and present some early results.
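
    The sketch below illustrates the kind of static shape tracking such an analysis performs, using simplified rules and hypothetical names rather than Ariadne's actual WALA-based implementation: tensor shapes are propagated abstractly, so a dimension mismatch is reported before any data is processed.

        # Illustrative only, not Ariadne's code: propagate tensor shapes
        # abstractly so mismatches surface at analysis time, not run time.

        def infer_matmul(a, b):
            """Matrix product: inner dimensions must agree."""
            if a[-1] != b[0]:
                raise TypeError(f"matmul {a} @ {b}: inner dimensions differ")
            return (a[0], b[1])

        def infer_add(a, b):
            """Simplified NumPy/TensorFlow broadcasting over trailing dims."""
            out = []
            for i in range(1, max(len(a), len(b)) + 1):
                x = a[-i] if i <= len(a) else 1
                y = b[-i] if i <= len(b) else 1
                if x != y and x != 1 and y != 1:
                    raise TypeError(f"add {a} + {b}: incompatible shapes")
                out.append(max(x, y))
            return tuple(reversed(out))

        # Abstract "execution" of a two-layer net: track shapes, not values.
        hidden = infer_matmul((64, 784), (784, 128))   # (64, 128)
        hidden = infer_add(hidden, (128,))             # bias broadcast: (64, 128)
        logits = infer_matmul(hidden, (128, 10))       # (64, 10)
        try:
            infer_matmul(logits, (784, 128))           # bug: reused layer-1 weights
        except TypeError as err:
            print("flagged statically:", err)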

    Extending Stan for Deep Probabilistic Programming

    Stan is a popular declarative probabilistic programming language with a high-level syntax for expressing graphical models and beyond. Stan differs by nature from generative probabilistic programming languages like Church, Anglican, or Pyro. This paper presents a comprehensive compilation scheme that translates any Stan model into a generative language, and proves its correctness. This sheds a clearer light on the relative expressiveness of different kinds of probabilistic languages and opens the door to combining their respective strengths. Specifically, we use our compilation scheme to build a compiler from Stan to Pyro and extend Stan with support for explicit variational inference guides and deep probabilistic models. That way, users familiar with Stan get access to new features without having to learn a fundamentally new language. Overall, our paper clarifies the relationship between declarative and generative probabilistic programming languages and is a step towards making deep probabilistic programming easier.
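
    As a hand-written illustration of the declarative-to-generative correspondence (not the compiler's actual output), here is a small Stan model and the kind of Pyro program a compiler in this spirit could emit for it.

        # Stan source (declarative):
        #   data       { int<lower=0> N; vector[N] y; }
        #   parameters { real mu; }
        #   model      { mu ~ normal(0, 1);  y ~ normal(mu, 1); }
        import torch
        import pyro
        import pyro.distributions as dist
        from pyro.infer import MCMC, NUTS

        def model(y):
            # The 'parameters' block becomes a latent sample statement...
            mu = pyro.sample("mu", dist.Normal(0.0, 1.0))
            # ...and statements over data become observed sample statements.
            with pyro.plate("data", len(y)):
                pyro.sample("y", dist.Normal(mu, 1.0), obs=y)

        # The generative model then plugs into Pyro's inference machinery,
        # e.g. the NUTS sampler:
        mcmc = MCMC(NUTS(model), num_samples=200, warmup_steps=100)
        mcmc.run(torch.randn(10) + 3.0)
        print(mcmc.get_samples()["mu"].mean())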

    I Can Parse You: Grammars for Dialogs

    Humans and computers increasingly converse via natural language. Those conversations are moving from today's simple question answering and command-and-control to more complex dialogs. Developers must specify those dialogs. This paper explores how to assist developers in that specification. We map out the staggering variety of applications for human-computer dialogs and distill it into a catalog of flow patterns. Based on that catalog, we articulate the requirements for dialog programming models and offer our vision for satisfying these requirements using grammars. If our approach catches on, computers will soon parse you to better assist you in your daily life.
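
    One concrete way to read "grammars for dialogs", sketched below with hypothetical names and not taken from the paper, is to treat a dialog flow as grammar productions: nonterminals are sub-dialogs, and each terminal is a prompt that consumes one user utterance.

        # Illustrative sketch only: a dialog flow as grammar productions,
        # driven by a tiny interpreter. All names are hypothetical.

        # Nonterminals expand to sequences; terminals are (prompt, slot) pairs.
        GRAMMAR = {
            "booking": ["city", "date", "confirm"],
            "city":    [("Which city?", "city")],
            "date":    [("What date?", "date")],
            "confirm": [("Book it? (yes/no)", "confirmed")],
        }

        def run_dialog(symbol, utterances, slots):
            """Recursively 'parse' the user: each terminal consumes one turn."""
            for item in GRAMMAR[symbol]:
                if isinstance(item, tuple):          # terminal: prompt the user
                    prompt, slot = item
                    print(prompt)
                    slots[slot] = next(utterances)
                else:                                # nonterminal: recurse
                    run_dialog(item, utterances, slots)
            return slots

        # Simulated user turns; a real system would read from the channel.
        user = iter(["Paris", "tomorrow", "yes"])
        print(run_dialog("booking", user, {}))
        # -> {'city': 'Paris', 'date': 'tomorrow', 'confirmed': 'yes'}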